Teaching the Teachers: Lessons from the Generative AI Literacy Frontlines

“What is most exciting about this moment is seeing teachers energized by the possibilities AI offers to improve their practice and inspire students to engage with their learning in new ways.”


Generative AI has disrupted industries in a multitude of ways. Many K-12 educators first encountered it when an otherwise grade-level student turned in surprisingly college-level writing. Teachers were left to figure out how to respond. Should they penalize the student for using a widely available, free technology? Should they ban generative AI outright? Do academic integrity policies and guidelines even address this?

Since 2022, teachers have had to navigate a whirlwind: from AI bans to district-provided AI access, from AI detectors meant to catch “cheating” to a plethora of edtech products promising to streamline tasks, save time, and deliver personalized instruction to every student. The marketing seemed to promise that teachers would finally get the classroom sidekick of their dreams.

In reality, the landscape is far more complex and nuanced than the marketing hype suggests, and teachers are overwhelmed by the options and by AI’s impact on students. As a former high school science teacher now working with a team of current and former educators at Advanced Learning Partnerships (ALP), I’ve seen this complexity up close. Together, we’ve spent recent years listening to teachers, testing strategies, and translating what we’ve learned into practical guidance. Through this work on the frontlines of AI literacy, I’ve identified five key insights that are helping schools move forward with clarity and confidence in this rapidly evolving space.

Insight #1: Start with Understanding, Not Tools

Generative AI literacy is more than a buzzword—at ALP, we believe it is the cornerstone of AI adoption in schools. For clarity, let’s define what generative AI literacy covers. It is understanding the following:

  • What generative AI is and how it differs from predictive AI
  • How generative AI models are built, trained, and evolve
  • The inherent risks, limitations, and ethical considerations in AI development and use
  • The capabilities and potential impacts of AI systems on education, society, and young people specifically
  • How to interact with AI systems critically and responsibly, while adapting practices as AI evolves

Without this fundamental understanding, educators lack the knowledge to be critical consumers of the technology and risk limiting its use to low-level applications. This excellent sketch video, which I share with educators during professional learning sessions, compares the low-level application of AI to having Albert Einstein in your pocket but only asking him to solve a fifth-grade math problem. When we focus on basic content creation or don’t invest time in learning effective prompting, we become jaded by poor results.

Before integrating generative AI in education, we must understand this technology. We must acknowledge the power of its capabilities while recognizing the issues that require critical evaluation. With this knowledge, teachers can interact responsibly with AI tools and be more discerning about when, where, and why to use them to elevate their instructional practice.


Insight #2: Establish Clear Guidance and Support Systems

While developing AI literacy is foundational, teachers need administrative support to implement AI effectively. Many educators rightfully hesitate to use AI without clear guidelines on privacy, student data protection, and appropriate classroom applications. At ALP, we’ve discovered that successful AI integration begins at the leadership level. Before working with teachers on generative AI literacy, our executive consulting team collaborates with district leaders to establish guiding principles and policies that create a secure foundation for AI adoption. The most successful initiatives we’ve seen involve teachers, students, and parents in the policy development process, creating stakeholder buy-in from the start.

In one North Carolina district, an AI teacher cohort developed stakeholder-specific versions of their district’s AI guidelines—making them accessible for students, parents, and support staff. This clarity gave teachers confidence, students the guidance they were asking for, and the whole community a shared language to talk about AI in education and stay informed.

These guidance documents must be living, not static. The most forward-thinking districts establish regular review cycles, collecting implementation data and revising practices based on classroom realities rather than theoretical concerns. This proactive approach prevents the all-too-common reactive scramble after incidents and builds institutional knowledge over time. Schools that commit to this iterative process develop increasingly sophisticated AI integration that truly serves their educational mission.

Insight #3: Combat the Eliza Effect by Prioritizing Human Judgment

Here’s an ironic insight: We teach critical thinking, yet both teachers and students fall prey to the “Eliza effect,” the human tendency, first observed with the 1960s chatbot ELIZA, to attribute expertise and validity to AI systems. The Eliza effect whispers: “The AI sounds good. Must be good enough.” But your human voice is better.

When AI generates sophisticated content within seconds, it’s tempting to simply copy-paste. I’ve seen teachers create parent communications or lesson plans with AI, and then… stop. They’re proud of the time saved, but I remind them the clock is still running: AI saves time in drafting, but critical evaluation is still essential. Otherwise, what gets copied is often impersonal or, worse, inaccurate and counterproductive to the teacher’s original goals.

To avoid this, I coach teachers to review AI output with these questions and iterate accordingly:

  • Does the AI output match my teaching context accurately? For example, if my instructional goal is for students to demonstrate a skill, and the AI-generated lesson plan merely asks students to recall knowledge, the generated content does not match my teaching context.
  • Is the AI output factual, unbiased, and balanced? For example, an AI-generated resource might label all fats as “bad for you and to be avoided.” That is an oversimplification and outdated nutrition advice; a balanced, accurate version would distinguish between healthy fats and trans fats, helping students make informed dietary decisions.
  • Is the AI output validating my thoughts without challenge? AI tends to be a sycophant (a yes-man). For example, when using AI to generate student feedback, teachers should be cautious: if you tell the AI “this student’s writing is really strong,” it may produce overly positive comments and ignore areas for growth rather than offering balanced, constructive feedback.
  • Does this instructional content reflect my personal experiences and voice like the rest of my teaching? AI-generated content can sound robotic, but teaching has a strong relationship component. As teachers, we should consider adding our own personal stories, jokes, and knowledge that build connections with our students and make our classroom uniquely personal to the humans creating it.

I believe so strongly in this part of the AI interaction process that I try to build it into every session where teachers collaborate with AI. This critical evaluation skill requires practice and attention, but teachers who do this consistently get significantly better outcomes from AI.


Insight #4: Reimagining Assessments Is a Long Game

AI challenges the way we’ve traditionally assessed learning. In response, some teachers have banned its use entirely, requiring that assessments be completed in controlled environments. Others have embraced AI for specific tasks with clear parameters (see the AI Use Scale from Leon Furze et al.), while a few allow full AI integration with transparency and reflection built in.

Each approach creates a different student experience, and much of a teacher’s decision rests on their comfort level with preparing students to use AI. A no-AI environment avoids the complexity of teaching AI skills but may leave students unprepared for future realities. Classrooms that integrate AI must deliberately teach prompting, critical evaluation, and constructive AI collaboration: essential skills that don’t develop automatically but require modeling, practice, and an environment of trust.

Successful assessment redesign happens through small, deliberate steps rather than overnight revolution. Teachers in our cohorts have found creative approaches like:

  • Using AI to quickly generate differentiated formative assessments, a task that normally takes significant time and energy.
  • Having students “find and fix” hallucinations in math-based problems as part of their learning; this also does double duty, peeling back the curtain to show students that AI is merely an imperfect machine whose output needs to be scrutinized.
  • Generating AI feedback based on a teacher’s rubric and sample grading style to save time without losing quality; this still requires oversight, but it helps teachers with the age-old challenge of providing timely feedback.

Here’s a story I love to share from a coaching conversation in a recent partnership: One particularly innovative physics teacher wanted to test custom AI tutors to help close achievement gaps. What he discovered surprised him: struggling students avoided the AI tutors completely, while high achievers saw no need for them. He quickly identified that the real issue wasn’t personalization; it was engagement. Struggling students didn’t find the AI tutor helpful because they had already checked out of the material, and the tutor didn’t offer a new lens to re-engage them. The teacher thought about what had previously succeeded in engaging his students: games. By pivoting to AI-enhanced gamification (choose-your-own adventures, escape rooms, and group challenges), he saw immediate improvement in student participation across all achievement levels. He is now collecting data on whether these AI-enhanced games translate into academic growth.

The takeaway: By correctly identifying your specific classroom challenge rather than forcing an AI solution, you can find a student-centered approach that moves learning forward. The path forward isn’t through sweeping overhauls but through intentional experimentation and adaptation. Start with your current classroom challenges, leverage AI to amplify what already works, and measure the impact. Small, focused innovations often yield more meaningful results than revolutionary redesigns.

Insight #5: Building AI Fluency Through Community

One-off training rarely transforms practice. The most powerful professional growth in genAI literacy we’ve seen occurs in communities of practice: teacher cohorts who explore, experiment, and reflect together. These professional learning communities provide both accountability and inspiration as educators move from basic literacy to true AI fluency.

Many schools begin by forming an AI task force of teachers identified as early adopters. While well-intentioned, these committees often struggle as the work competes with existing responsibilities. When we work alongside these teams, we help prioritize this exploration and provide the structure, resources, and visioning needed for sustainable progress. The educators making the most impressive strides are those who are supported to collaborate regularly, share successes and failures, and push each other’s thinking about AI’s role in education.


Why GenAI Literacy Matters

I’ve never promised educators that AI will be a magic pill or transform education. What is most exciting about this moment is seeing teachers energized by the possibilities AI offers to improve their practice and inspire students to engage with their learning in new ways.

The insights the ALP team has gathered reflect a consistent set of priorities: understanding AI’s risks and opportunities, establishing clear guidance for its ethical use, maintaining human judgment while engaging with tools and platforms, reimagining assessments thoughtfully, and creating supportive communities. ALP’s professional learning is not about chasing every new tool or trend. Instead, our approach remains focused on helping educators ask better questions, make intentional choices, and stay rooted in what matters most: students, learning, and strong pedagogical practice. Generative AI may be new, but the core work of educators (connection, reflection, and growth) hasn’t changed.

This story was written by Shirin Mathew. Follow Shirin on LinkedIn.

Otter AI and Claude 3.7 Sonnet were used to provide feedback and suggestions for this post.

For downloadable resources related to communicating and evaluating AI use in education, explore best practices for AI professional learning in K-12 education.

ALP’s professional learning institutes, coaching cohorts, and blended learning experiences support teachers’ professional learning for ethical and effective AI use. Contact us to learn more.

Want to prepare your district for AI adoption? Discover how ALP’s AI professional learning services help K-12 districts develop AI policies and leadership frameworks.

 

